Careful with stats:
There really isn’t a sharp cutoff between reality and probability.  A friend of my youth once showed me a penny he had that was tails on both sides.  You could flip that puppy all day and it would always be tails.  After a few dozen tries the odds would become astronomical that there was something fishy about the coin.  Of course I didn’t do that.  I just turned it over and looked at the other side.  Yep.  It was tails, too.  But of course there was still a tiny chance that I had made an observational error.  With two of us looking, that tiny chance became even tinier.  But it could never go away altogether.  So I am a bit troubled by a recent paper: Regina Nuzzo, “Statistical Errors,” NATURE vol. 506 no. 7487, February 13, 2014, page 150.

The paper takes the position that statistics alone do not constitute proof: there must be a plausible connection before one can say one event causes another.  I may have mentioned this earlier, but decades ago in medical school somebody did a study in which he gave an enormous number of random, rather benign treatments to people with benign ulcers.  The standard in those days was to have the patient sip a little cream every couple of hours.  I shudder to think how much harm we did that way.  (The treatment goes back to Ancient Rome, and we were pretty sure we weren’t accomplishing anything, but there wasn’t anything better at the time.  In recent years a bacterium, Helicobacter pylori, has been shown to be at fault, and now we give antibiotics.)  In the study of which I speak, licorice was found to improve ulcers.  Nobody bought it.  It didn’t seem plausible.  In fact when I looked at the numbers I thought, “So you tried 100 things and one of them worked at p = 0.01.  That’s technical jargon for ‘the chance of this result happening by chance, in the absence of any effect, is 1 in 100.’  So what you got is exactly what you would expect if nothing worked at all.”  In those dim days this line of reasoning was not well worked out, but statistics has since come far enough that it is possible to account for how many things you have tried (the multiple-comparisons problem).
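If you like, you can watch the licorice effect happen on a computer.  Here is a minimal sketch in Python (my illustration, not anything from the old study or the Nuzzo paper; the 100 “treatments” and group sizes are made up for the demonstration): test 100 treatments that do nothing whatever, and see how many clear p < 0.01 anyway.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha = 0.01
    hits = 0
    for _ in range(100):                 # 100 treatments, none of which does anything
        treated = rng.normal(size=50)    # outcomes under the "treatment"
        control = rng.normal(size=50)    # outcomes under placebo -- same distribution
        p = stats.ttest_ind(treated, control).pvalue
        if p < alpha:
            hits += 1
    print(hits, "of 100 useless treatments passed p <", alpha)

On an average run about one useless treatment passes, which is the licorice result exactly.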

But what, alas, has become more common, according to the article, is for people to do random searches of the literature in a field, find two things that correlate with a p value of less than .05 (that’s one chance in twenty, which hardly seems airtight proof but is considered acceptable), and then publish all the correlations as new insights, which of course they are not.  They are just noise.  And here is where the article makes its telling point: all those articles report correlations that clear p < .05, but not by much.  And that is of course precisely what you would expect of meaningless noise.
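That point is easy to check for yourself.  When there is no real effect, p values come out spread evenly between 0 and 1, so the ones that sneak under .05 are spread evenly between 0 and .05; a genuine effect would instead pile them up near zero.  A quick sketch in Python (again my own toy demonstration, with invented sample sizes, not the article’s data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    sig = []
    for _ in range(20000):               # 20,000 pairs of pure noise
        x = rng.normal(size=30)
        y = rng.normal(size=30)
        r, p = stats.pearsonr(x, y)      # any correlation here is spurious
        if p < 0.05:
            sig.append(p)
    print(len(sig), "spurious hits out of 20000 pairs")
    print("share that only just clear .05 (p > .01):",
          round(sum(p > 0.01 for p in sig) / len(sig), 2))

About four out of five of the spurious “findings” land between .01 and .05, just barely significant, which is the signature the article describes.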

Well and good.  The article points out a problem.  But it goes on to say, more or less, that if the relationship is preposterous on the face of it, then you should dismiss it out of hand.

I say no.  You go back and do a stronger statistical test.  (That means you try it more times.)  That’s what they should have done with the licorice before they breathed a word of it to anybody. 
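Retesting is cheap insurance here, because noise rarely survives a second, independent look: two independent passes at p < .05 happen by chance only .05 × .05 = 0.25% of the time.  A rough sketch of that arithmetic in Python (my toy setup, assuming independent trials):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def trial(effect=0.0, n=50):
        """One placebo-controlled comparison; returns the p value."""
        treated = rng.normal(loc=effect, size=n)
        control = rng.normal(loc=0.0, size=n)
        return stats.ttest_ind(treated, control).pvalue

    # Round 1: screen 100 useless treatments at p < .05.
    survivors = sum(trial() < 0.05 for _ in range(100))
    # Round 2: independently retest each apparent winner.
    replicated = sum(trial() < 0.05 for _ in range(survivors))
    print("false positives in round 1:", survivors,
          "| still standing after a retest:", replicated)

A spurious licorice almost never survives the second round; a real effect would.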

Of course where this hits home is that I have presented and will continue to present evidence that kinship and fertility happen together or not at all in the long run.  But so far this is a statistical argument.  Even when the day comes when I can say, “That atom right there is the cause,” or words to that effect, it will still be a statistical phenomenon.  And I suppose it will continue to be dismissed out of hand.

There have been 98 visitors over the past month.
